The quality of knowledge retrieval is crucial in knowledge-intensive conversations. Two common strategies for improving retrieval quality are fine-tuning the retriever and generating a self-contained query, but both impose heavy burdens of expensive computation or elaborate annotation. In this paper, we propose an unsupervised query-enhanced approach for knowledge-intensive conversations, namely QKConv. QKConv consists of three modules: a query generator, an off-the-shelf knowledge selector, and a response generator. Without extra supervision, the end-to-end joint training of QKConv explores multiple candidate queries and exploits the corresponding selected knowledge to yield the target response. To evaluate the effectiveness of the proposed method, we conduct comprehensive experiments on conversational question answering, task-oriented dialogue, and knowledge-grounded conversation. Experimental results demonstrate that QKConv achieves state-of-the-art performance among unsupervised methods and competitive performance against supervised methods.
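A minimal sketch of the joint-training loop described above, assuming three interchangeable modules; all names are placeholders rather than the authors' code:

    # Hypothetical stand-ins for the three QKConv modules; the loss below paraphrases
    # the abstract (explore candidate queries, retrieve knowledge for each, and reward
    # queries whose knowledge makes the target response likely), not the exact objective.
    def joint_training_loss(context, target_response,
                            query_generator, knowledge_selector, response_generator,
                            num_candidates=4):
        total = 0.0
        # Explore several candidate queries for the same dialogue context.
        for query in query_generator.sample(context, n=num_candidates):
            # The off-the-shelf (frozen) selector retrieves knowledge for this query.
            knowledge = knowledge_selector.retrieve(query)
            # The target response supervises both generators: a query is good if the
            # knowledge it retrieves explains the target response well.
            total += response_generator.nll(context, knowledge, target_response)
            total += query_generator.nll(context, query)
        return total / num_candidates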
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices and bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once; this was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Extensive empirical evidence demonstrates that conditional generative models are easier to train and perform better than unconditional ones by exploiting the labels of data; the same holds for score-based diffusion models. In this paper, we analyze the phenomenon formally and identify that the key to conditional learning is partitioning the data properly. Inspired by the analysis, we propose self-conditioned diffusion models (SCDM), which are trained conditioned on cluster indices obtained by running k-means on features extracted by a model pre-trained in a self-supervised manner. SCDM significantly improves the unconditional model across various datasets and achieves a record-breaking FID of 3.94 on ImageNet 64x64 without labels. Besides, SCDM achieves a slightly better FID than the corresponding conditional model on CIFAR10.
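A minimal sketch of the self-conditioning recipe, assuming `feature_extractor` is any frozen self-supervised encoder and the diffusion model accepts a class-style condition; the names are placeholders, not the paper's code:

    # Pseudo-labels from k-means on self-supervised features; the diffusion model
    # itself is unchanged and simply treats the cluster index as a class label.
    import numpy as np
    from sklearn.cluster import KMeans

    def build_pseudo_labels(images, feature_extractor, num_clusters=100):
        features = np.stack([feature_extractor(img) for img in images])
        kmeans = KMeans(n_clusters=num_clusters, n_init=10).fit(features)
        return kmeans.labels_  # one pseudo-class index per image

    # Training then looks like ordinary class-conditional diffusion:
    #   loss = diffusion_model.loss(x_i, condition=pseudo_labels[i])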
Vision Transformers (ViT) have shown promise in various vision tasks, including low-level ones, while U-Net remains dominant in score-based diffusion models. In this paper, we conduct a systematic empirical study of ViT-based architectures in diffusion models. Our results suggest that adding extra long skip connections (as in U-Net) to ViT is crucial for diffusion models. The new ViT architecture, together with other improvements, is referred to as U-ViT. On several popular vision datasets, U-ViT achieves generation results competitive with SOTA U-Net-based models while requiring a comparable amount of parameters and computation, if not less.
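A simplified rendering of a ViT with U-Net-style long skip connections (my own PyTorch sketch, not the official U-ViT implementation; the fusion by concatenation plus a linear projection is an assumption):

    import torch
    import torch.nn as nn

    class SkipViT(nn.Module):
        """Transformer blocks with long skips from the first half to the second half."""
        def __init__(self, dim=256, depth=8, heads=4):
            super().__init__()
            assert depth % 2 == 0
            self.blocks = nn.ModuleList(
                nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
                for _ in range(depth)
            )
            # One fusion layer per long skip connection (second half of the network).
            self.skip_proj = nn.ModuleList(nn.Linear(2 * dim, dim) for _ in range(depth // 2))

        def forward(self, tokens):          # tokens: (batch, seq_len, dim)
            skips, half = [], len(self.blocks) // 2
            for blk in self.blocks[:half]:  # "encoder" half: remember activations
                tokens = blk(tokens)
                skips.append(tokens)
            for proj, blk in zip(self.skip_proj, self.blocks[half:]):  # "decoder" half
                tokens = proj(torch.cat([tokens, skips.pop()], dim=-1))  # long skip fusion
                tokens = blk(tokens)
            return tokens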
Many open-domain dialogue models pre-trained on social media comments can generate coherent replies but fail to produce engaging responses when interacting with real users. This phenomenon is probably mainly due to the deficiency of annotated human conversations and the misalignment with human preference. In this paper, we propose a novel and efficient approach, Diamante, to boost open-domain chatbots, in which two kinds of human feedback (including explicit demonstration and implicit preference) are collected and leveraged. By asking annotators to select or amend model-generated candidate responses, Diamante efficiently collects human-demonstrated responses and constructs a Chinese chit-chat dataset. To enhance the alignment with human preference, Diamante leverages the implicit preference in the data collection process and introduces generation-evaluation joint training. Comprehensive experiments indicate that the Diamante dataset and the joint training paradigm can significantly boost the performance of pre-trained Chinese dialogue models.
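One plausible instantiation of the generation-evaluation joint training described above (notation mine; the margin-ranking form is an assumption, not taken from the paper) combines a language-modeling loss on the human-demonstrated response r+ with a preference loss that scores r+ above a model-generated candidate r-:

    \mathcal{L} = -\log p_{\theta}(r^{+}\mid c) \;+\; \max\bigl(0,\; m - s_{\theta}(c, r^{+}) + s_{\theta}(c, r^{-})\bigr),

where c is the dialogue context, s_theta is an evaluation head sharing the backbone with the generator, and m is a margin.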
Deep generative models (DGMs) are data-hungry. Essentially, this is because learning a complex model on limited data suffers from large variance and is prone to overfitting. Inspired by the bias-variance dilemma, we propose regularized deep generative models (Reg-DGM), which leverage a nontransferable pre-trained model to reduce the variance of generative models trained with limited data. Formally, Reg-DGM optimizes a weighted sum of a certain divergence between the data distribution and the DGM, and the expectation under the DGM of an energy function defined with respect to the pre-trained model. Theoretically, we characterize the existence and uniqueness of the global minimum of Reg-DGM in a non-parametric setting and rigorously prove the statistical benefits of Reg-DGM with respect to the mean squared error and the expected risk in a simple yet representative Gaussian-fitting example. Empirically, the choice of DGM and pre-trained model in Reg-DGM is quite flexible. In particular, with a ResNet-18 classifier pre-trained on ImageNet and a data-dependent energy function, Reg-DGM consistently improves the generation performance of strong DGMs, including StyleGAN2 and ADA, on several benchmarks with limited data, and achieves competitive results against state-of-the-art methods.
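In symbols, the training objective sketched in the abstract is roughly of the following form (notation mine, not the paper's):

    \min_{\theta}\; D\bigl(p_{\mathrm{data}},\, p_{\theta}\bigr) \;+\; \lambda\, \mathbb{E}_{x \sim p_{\theta}}\bigl[f(x)\bigr],

where p_theta is the DGM, D is the divergence optimized by the base generative model, f is the energy function defined with the pre-trained model, and lambda >= 0 weights the regularizer.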
Score-based diffusion generative models (SDGMs) have achieved SOTA FID results in unpaired image-to-image translation (I2I). However, we note that existing methods completely ignore the training data in the source domain, leading to sub-optimal solutions for unpaired I2I. To this end, we propose energy-guided stochastic differential equations (EGSDE), which employ an energy function pre-trained on both the source and target domains to guide the inference process of a pre-trained SDE for realistic and faithful unpaired I2I. Building on two feature extractors, we carefully design the energy function to encourage the transferred image to preserve domain-independent features and discard domain-specific ones. Moreover, we provide an alternative interpretation of EGSDE as a product of experts, where each of the three experts (corresponding to the SDE and the two feature extractors) contributes solely to faithfulness or realism. Empirically, we compare EGSDE with a large number of baselines on three widely adopted unpaired I2I tasks under four metrics. EGSDE not only consistently outperforms existing SDGM-based methods in almost all settings, but also achieves SOTA realism results (e.g., an FID of 65.82 on Cat-to-Dog and 59.75 on Wild-to-Dog on AFHQ) without sacrificing faithfulness.
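Concretely, the guidance described above amounts to subtracting an energy gradient from the score in the reverse-time SDE; a rough sketch in standard SDE notation (mine, not necessarily the paper's exact formulation):

    dx = \Bigl[f(x,t) - g(t)^{2}\bigl(s_{\theta}(x,t) - \nabla_{x} E(x, y, t)\bigr)\Bigr]dt + g(t)\, d\bar{w},

where s_theta is the score network pre-trained on the target domain, y is the source image, and E combines the two feature-extractor-based terms rewarding domain-independent similarity and penalizing domain-specific similarity; the product-of-experts reading follows because exp(-E) multiplies the SDE's model density.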
Generative open-domain dialogue systems can benefit from external knowledge, but the lack of external knowledge resources and the difficulty of finding relevant knowledge limit the development of this technology. To this end, we propose a knowledge-driven dialogue task with dynamic service information. Specifically, we use a large number of service APIs, which provide high coverage and spatiotemporal sensitivity, as external knowledge sources: the dialogue system generates a query together with user information to request an external service, obtains the relevant knowledge, and generates a response based on this knowledge. To support this approach, we collect and release DuSinc, the first open-domain Chinese service-knowledge dialogue dataset. Meanwhile, we build a baseline model, PLATO-SINC, which realizes the automatic utilization of service information in dialogue. Both automatic and human evaluations show that the proposed method significantly improves open-domain conversation: compared with the dialogue pre-training model PLATO-2, the session-level overall score in human evaluation improves by 59.29%. The dataset and baseline model will be open-sourced.
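A minimal sketch of the query-service-response loop described above; the function and object names are placeholders, not from the paper:

    # Dialogue grounded on dynamic service knowledge: generate a query, call the
    # external service with user information, then respond on top of the result.
    def respond(context, user_profile, dialogue_model, service_api):
        # 1. The dialogue model decides what to ask the external service.
        query = dialogue_model.generate_query(context, user_profile)
        # 2. The service API returns dynamic, spatiotemporally sensitive knowledge
        #    (e.g., weather or nearby places) for that query.
        knowledge = service_api.call(query, user_profile)
        # 3. The response is conditioned on both the context and the knowledge.
        return dialogue_model.generate_response(context, knowledge)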
Contrastive Language-Image Pretraining (CLIP) has received widespread attention because its learned representations transfer well to various downstream tasks. During CLIP training, the InfoNCE objective aligns positive image-text pairs and separates negative ones. In this paper, we show a representation grouping effect in this process: the InfoNCE objective indirectly groups semantically similar representations together via randomly emerged within-modal anchors. We introduce Prototypical Contrastive Language-Image Pretraining (ProtoCLIP) to enhance such grouping, improving its efficiency and its robustness against the modality gap. Specifically, ProtoCLIP sets up prototype-level discrimination between the image and text spaces, which efficiently transfers higher-level structural knowledge. We further propose Prototypical Back Translation (PBT) to decouple representation grouping from representation alignment, which effectively learns meaningful representations under a large modality gap. PBT also enables us to introduce additional external teachers with richer prior knowledge. ProtoCLIP is trained with an online episodic training strategy, which makes it scalable to unlimited amounts of data. Combining the above novel designs, we train ProtoCLIP on Conceptual Captions and achieve a +5.81% ImageNet linear probing improvement and a +2.01% ImageNet zero-shot classification improvement. Code is available at https://github.com/megvii-research/protoclip.
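For reference, the image-to-text direction of the InfoNCE objective mentioned above has the standard CLIP form (a symmetric text-to-image term is added in practice):

    \mathcal{L}_{\mathrm{InfoNCE}} = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{\exp\bigl(\mathrm{sim}(v_i, t_i)/\tau\bigr)}{\sum_{j=1}^{N}\exp\bigl(\mathrm{sim}(v_i, t_j)/\tau\bigr)},

where v_i and t_i are the embeddings of the i-th image-text pair, sim is cosine similarity, and tau is the temperature; ProtoCLIP's prototype-level discrimination operates on cluster prototypes rather than only on such instance pairs.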
Score-based generative models have excellent performance in terms of both generation quality and likelihood. They model the data distribution by matching a parameterized score network against first-order data score functions. The score network can be used to define an ODE (the "score-based diffusion ODE") for exact likelihood evaluation. However, the relationship between the likelihood of the ODE and the score matching objective is unclear. In this work, we prove that matching the first-order score is not sufficient to maximize the likelihood of the ODE, by showing a gap between the maximum likelihood and score matching objectives. To fill this gap, we show that the negative likelihood of the ODE can be bounded by controlling the first-, second-, and third-order score matching errors, and we further propose a novel high-order denoising score matching method to enable maximum likelihood training of score-based diffusion ODEs. Our algorithm guarantees that the high-order matching error is bounded by the training error and the lower-order matching errors. We empirically observe that with high-order score matching, score-based diffusion ODEs achieve better likelihoods on both synthetic data and CIFAR-10, while retaining high generation quality.
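For context, the "score-based diffusion ODE" is the standard probability-flow ODE associated with a forward SDE dx = f(x,t)dt + g(t)dw (general notation, not specific to this paper):

    \frac{dx_t}{dt} = f(x_t, t) - \tfrac{1}{2}\, g(t)^{2}\, s_{\theta}(x_t, t),

where s_theta(x_t, t) approximates the first-order score, the gradient of log p_t(x); exact likelihoods then follow from the instantaneous change-of-variables formula, which is why the quality of the score estimate (and, per this work, of its higher-order analogues) controls the model's likelihood.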